In discussing “alternative” medicine it’s impossible not to discuss, at least briefly, placebo effects. Indeed, one of the most common complaints we at SBM voice about clinical trials of alternative medicine is the lack of adequate controls — meaning adequate controls for placebo and nonspecific effects. Just type “acupuncture” in the search box in the upper left-hand corner of the blog masthead, and you’ll pull up a number of discussions of acupuncture clinical trials that SBM bloggers have written over the last three years. If you check some of these posts, you’ll find that in nearly every case we spend considerable time and effort discussing whether the placebo or sham control used was adequate, noting that the better the sham controls, the less likely acupuncture studies are to have a positive result.
Some of the less clueless advocates of “complementary and alternative medicine” (CAM) seem to realize that much of what they do relies on placebo effects. As a result, they tend to argue that what they do is useful and good because it is “harnessing the placebo effect” for therapeutic purposes. One problem that advocates of SBM (like those of us here who have taken an interest in this topic) tend to have with this argument is that it has always been assumed that a good placebo requires, at some level, deceiving the patient by saying or implying that he is receiving an active treatment or medicine of some kind. This, we have argued, is a major ethical problem with using placebos in patients, and advocates of placebo medicine appear to agree, because they frequently argue that placebo effects can be harnessed without deception. Indeed, just last week an example of this argument was plastered all over multiple news outlets and blogs in the form of stories and posts with headlines and titles like:
- Sugar Pills Work Even When People Know They Are Fake
- Fake Pills Can Work, Even If Patients Know It
- Irritable Bowel Syndrome: Placebo Works Even if Patients Know
- Placebos help, even when patients know about them
- Evidence that placebos could work even if you tell people they’re taking placebos
- Meet the Ethical Placebo: A Story that Heals
All but one of these articles and blog posts, each discussing a new study in PLoS ONE that purports to have found that placebo effects can be elicited in irritable bowel syndrome (IBS) without deception, buy completely into that very thesis. Here is an example, taken from the Reuters story about the study:
Placebos can help patients feel better, even if they are fully aware they are taking a sugar pill, researchers reported on Wednesday on an unusual experiment aimed to better understand the “placebo effect.”
Nearly 60 percent of patients with irritable bowel syndrome reported they felt better after knowingly taking placebos twice a day, compared to 35 percent of patients who did not get any new treatment, they report in the Public Library of Science journal PLoS ONE.
“Not only did we make it absolutely clear that these pills had no active ingredient and were made from inert substances, but we actually had ‘placebo’ printed on the bottle,” Ted Kaptchuk of Harvard Medical School and Beth Israel Deaconess Medical Center in Boston, who led the study, said in a statement.
From Robert Langreth:
The latest study, a randomized trial of 80 women and men published in Plos One by Harvard researchers, shows that even when clinicians told women with irritable bowel syndrome they are getting fake pills, the fake pills still worked.
From CBS News:
Incredibly, according to a new study of patients with irritable bowel syndrome, the placebo effect, even when patients were in on the secret, worked almost as well as the leading medication on the market.
It’s also a lot cheaper. And the best part about placebo – no side effects.
From NPR:
However, conventional wisdom is that placebos require deception. In order to work, a patient has to think it’s an active drug. So the American Medical Association and other authorities frown on the use of placebos in everyday medical care for ethical reasons.
But along came the “honest placebo” study.
And the most overblown report of all comes from Steve Silberman:
A provocative new study called “Placebos Without Deception,” published on PLoS One today, threatens to make humble sugar pills something they’ve rarely had a chance to be in the history of medicine: a respectable, ethically sound treatment for disease that has been vetted in controlled trials.
So, does this study show what all these stories claim that it shows? In a word, no.
When in doubt, go to the study
As you will see, the claims for this study by people like Silberman are overblown compared even to the claims made for it by its authors. I’ll show you what I mean by heading right over to PLoS ONE to find the article Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome. Notice right away that the authors of this study, led by Dr. Ted J. Kaptchuk of Harvard’s Osher Research Center, have framed their findings as having shown that it is possible to use the placebo effect without deception. The Osher Center, for those of you not familiar with it, is Harvard’s center of quackademic medicine; only this time its investigators seem to be trying to do some real research into placebo effects. I’ll also give them credit for describing concisely the ethical conundrum that results when practitioners use placebos, at least as they are currently understood:
Directly harnessing placebo effects in a clinical setting has been problematic because of a widespread belief that beneficial responses to placebo treatment require concealment or deception. [3] This belief creates an ethical conundrum: to be beneficial in clinical practice placebos require deception but this violates the ethical principles of respect for patient autonomy and informed consent. In the clinical setting, prevalent ethical norms emphasize that “the use of a placebo without the patient’s knowledge may undermine trust, compromise the patient-physician relationship, and result in medical harm to the patient.” [4]
For purposes of this study, Kaptchuk et al wanted to determine whether an open-label placebo pill with a persuasive rationale was more effective than no treatment at all in a randomized but completely unblinded trial. It was made clear to the subjects in the experimental group that the pills they were getting were placebos (sugar pills); in fact, the pills were even named “placebo,” and the bottles containing them labeled as such! I’ll discuss the serious problems I found in the study in a moment, but first let me summarize the results. Ninety-two patients with irritable bowel syndrome were screened, and 80 were randomized either to no treatment (43 subjects) or to open-label placebo (37 subjects). The primary outcome was assessed using questionnaires, such as the IBS Global Improvement Scale (IBS-GIS), which asks participants: “Compared to the way you felt before you entered the study, have your IBS symptoms over the past 7 days been: 1) Substantially Worse, 2) Moderately Worse, 3) Slightly Worse, 4) No Change, 5) Slightly Improved, 6) Moderately Improved or 7) Substantially Improved.” Other scales were used as well. The trial lasted three weeks, and the results were as follows:
Open-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2±1.0 vs. 4.0±1.1, p< .001) and at 21-day endpoint (5.0±1.5 vs. 3.9±1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).
I find it rather interesting how the authors chose to frame their results in the actual manuscript compared to how they described those results to the media. One wonders whether saying that 60% of subjects taking placebos felt better, compared to 35% of those receiving regular care, sounded more convincing than citing improvement scores like the ones listed above. The reason is that I very much wonder whether the improvements reported are clinically significant. For instance, in the main result reported, those in the no-treatment arm reported an average IBS-GIS of around 4 (no change), while those in the open placebo arm reported an average of around 5 (slightly improved). How clinically relevant is a roughly one-point difference on a seven-point scale? I don’t know, but it sure seems to skirt the borders of clinical relevance and might not even achieve it. Come to think of it, that makes it clearer why the news stories reported the results the way they did.
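To put those figures in perspective, here is a quick back-of-the-envelope calculation in Python using the means and standard deviations quoted above. To be clear, the variable names and the pooled-SD effect size (Cohen’s d) are my own illustration, not anything the authors reported; the point is simply what a roughly one-point difference on a seven-point scale looks like:

```python
import math

# 21-day endpoint IBS-GIS figures, as quoted from the paper:
# mean, SD, and group size for each arm
placebo_mean, placebo_sd, placebo_n = 5.0, 1.5, 37
control_mean, control_sd, control_n = 3.9, 1.3, 43

# The relevant anchors of the 7-point IBS-GIS scale
anchors = {4: "No Change", 5: "Slightly Improved"}

# Absolute difference between arms: about one step on the scale
diff = placebo_mean - control_mean

# Pooled-SD standardized difference (Cohen's d), a crude gauge of
# magnitude that assumes roughly normal, interval-scaled scores
pooled_var = ((placebo_n - 1) * placebo_sd**2 +
              (control_n - 1) * control_sd**2) / (placebo_n + control_n - 2)
cohens_d = diff / math.sqrt(pooled_var)

print(f"Placebo arm mean: {placebo_mean} (~ '{anchors[5]}')")
print(f"No-treatment arm mean: {control_mean} (~ '{anchors[4]}')")
print(f"Difference: {diff:.1f} points; Cohen's d ~ {cohens_d:.2f}")
```

Whatever one makes of the standardized difference (it works out to roughly 0.8, which sounds respectable), the absolute result is a shift from approximately “No Change” to “Slightly Improved” on a subjective self-report scale, which is exactly why the question of clinical relevance matters.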
Be that as it may, I think I know why this study is in PLoS ONE, which is not known for publishing high-quality clinical trials, rather than in a better clinical journal. New England Journal of Medicine material this is most definitely not. (Of course, given the NEJM’s recent propensity for a bit of credulity toward woo, perhaps being in the NEJM doesn’t mean what it once did.) The first thing one notices is that there isn’t a single objective measure in the entire clinical trial; everything is completely subjective. This is fine, as far as it goes, given that placebo effects primarily affect subjective outcomes, such as pain and anxiety. Still, it would have been quite interesting if the investigators had included, along with their subjective measurements, some objective measurements, such as the number of bowel movements per day, time lost from work, or medication requirements. Anything. The authors even acknowledge this problem, pointing out that there are few objective measures for IBS. That may well be true, but it doesn’t mean it wouldn’t have been worth trying measures at least related to IBS. Then there’s the potential issue of reporting bias. Because this wasn’t a double-blinded trial, or even a single-blinded trial, it was impossible to hide from the subjects which group they had been assigned to. Combine that with the lack of objective measures, and all that remains is a set of subjective measures prone to considerable bias, all for a condition whose natural history is one of waxing and waning symptoms.
Lie to me, after all
As I mentioned in the introduction, this study is being touted all over the media and blogosphere as strong evidence that the placebo effect can be induced without deceiving patients. Indeed, the authors argue as much themselves in the discussion:
We found that patients given open-label placebo in the context of a supportive patient-practitioner relationship and a persuasive rationale had clinically meaningful symptom improvement that was significantly better than a no-treatment control group with matched patient-provider interaction. To our knowledge, this is the first RCT comparing open-label placebo to a no-treatment control. Previous studies of the effects of open-label placebo treatment either failed to include no-treatment controls [27] or combined it with active drug treatment. [28] Our study suggests that openly described inert interventions when delivered with a plausible rationale can produce placebo responses reflecting symptomatic improvements without deception or concealment.
Sorry, but no. All their claims otherwise notwithstanding, this study doesn’t really tell us anything new about placebo effects. Even though the investigators did tell their subjects that the sugar pills they were being given were inert, they also used suggestion to convince those subjects that the pills could nonetheless induce powerful “mind-body” effects. In other words, the investigators did the very thing they claimed they weren’t doing: they deceived their subjects in order to induce placebo effects, both by exaggerating the strength of the evidence for placebo effects and by using rather woo-ish terminology (“self-healing,” for instance). Here’s how the investigators describe what they told their patients:
Patients who gave informed consent and fulfilled the inclusion and exclusion criteria were randomized into two groups: 1) placebo pill twice daily or 2) no-treatment. Before randomization and during the screening, the placebo pills were truthfully described as inert or inactive pills, like sugar pills, without any medication in it. Additionally, patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.” The patient-provider relationship and contact time was similar in both groups. Study visits occurred at baseline (Day 1), midpoint (Day 11) and completion (Day 21). Assessment questionnaires were completed by patients with the assistance of a blinded assessor at study visits.
This is a description of the script that practitioners were to use when discussing these pills with subjects recruited to the study:
Patients were randomly assigned either to open-label placebo treatment or to the no-treatment control. Prior to randomization, patients from both groups met either a physician (AJL) or nurse-practitioner (EF) and were asked whether they had heard of the “placebo effect.” Assignment was determined by practitioner availability. The provider clearly explained that the placebo pill was an inactive (i.e., “inert”) substance like a sugar pill that contained no medication and then explained in an approximately fifteen minute a priori script the following “four discussion points:” 1) the placebo effect is powerful, 2) the body can automatically respond to taking placebo pills like Pavlov’s dogs who salivated when they heard a bell, 3) a positive attitude helps but is not necessary, and 4) taking the pills faithfully is critical. Patients were told that half would be assigned to an open-label placebo group and the other half to a no-treatment control group. Our rationale had a positive framing with the aim of optimizing placebo response.
How is this any different from what is already known about inducing placebo responses? I, for one, couldn’t find anything different. It’s right there in the Methods section: yes, the authors told subjects that they were receiving a sugar pill, but they also told them that this sugar pill would do wonderful things through the power of “mind-body” effects, as though it were scientifically clear-cut that it would.
Finally, the investigators recruited subjects thusly:
Participants were recruited from advertisements for “a novel mind-body management study of IBS” in newspapers and fliers and from referrals from healthcare professionals. During the telephone screening, potential enrollees were told that participants would receive “either placebo (inert) pills, which were like sugar pills which had been shown to have self-healing properties” or no-treatment.
Even the authors had to acknowledge that this was a problem:
A further possible limitation is that our results are not generalizable because our trial may have selectively attracted IBS patients who were attracted by an advertisement for “a novel mind-body” intervention. Obviously, we cannot rule out this possibility. However, selective attraction to the advertised treatment is a possibility in virtually all clinical trials.
In other words, not only did Kaptchuk et al deceive their subjects to trigger placebo effects, whether or not they realize or will admit it, but they might very well have selectively attracted patients more prone to believing in the power of “mind-body” interactions. Yes, patients were informed that they were receiving a placebo, but it must be emphasized again and again that that knowledge was tainted by what the investigators also told them about what the placebo pills could do. After all, the investigators told subjects in the placebo group that science says the placebo pills they would take were capable of activating some sort of woo-ful “mind-body” healing process.
In fact, I would say that what Kaptchuk et al did was no different from what we already know is required to induce placebo effects. Their explanations were also far more suggestive than those that investigators conducting placebo-controlled clinical trials offer subjects during recruitment. Consider: in most clinical trials, investigators tell subjects that they will be randomized to receive either the medicine being tested or a sugar pill (i.e., a placebo). This, subjects are told, means they have a 50-50 chance of receiving the real medicine and a 50-50 chance of receiving the placebo. In explaining this, investigators generally make no claim that the placebo pill has any effect whatsoever; in fact, in most clinical trials subjects are explicitly told that it does not. In contrast, Kaptchuk et al explicitly tried to “optimize the placebo response” by telling their subjects that the sugar pill activated some sort of mind-body response that would make them feel better, but only if they took the pills religiously. Yes, they did tell the subjects that they didn’t have to believe in mind-body interactions to experience the healing response. But did that matter? I doubt it, because figures of authority whom patients tend to believe (namely, doctors) also told subjects that there was strong evidence that these placebo pills activated some sort of powerful “mind-body” mechanism. That alone makes proclamations that the investigators triggered placebo effects without deception, shall we say, not exactly in line with reality. A far better design would have included at least one more group: a group receiving the placebo with a neutral script that simply described it as a sugar pill not expected to do anything, without all the suggestion about “powerful mind-body” effects. Lacking that additional group, this study tells us very little that we didn’t already know.
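To make the missing comparison concrete, here is a minimal sketch, in Python, of the two arms the trial actually ran alongside the neutral-script arm it lacked. The arm names and scripts are my own paraphrases for illustration; only the first two arms existed in the actual study:

```python
# Sketch of the trial's design: two actual arms plus the missing
# third arm that would separate "openly taking a placebo" from
# "openly taking a placebo sold with a suggestive rationale."
# Scripts are paraphrased, not the study's verbatim text.
trial_arms = {
    # Actual arm 1: matched provider contact, no pill
    "no_treatment": None,
    # Actual arm 2: open placebo plus a suggestive rationale
    "open_placebo_suggestive": (
        "These are inert sugar pills, but rigorous testing has shown "
        "they produce significant mind-body self-healing processes."
    ),
    # Missing arm 3: open placebo with a neutral script
    "open_placebo_neutral": (
        "These are inert sugar pills. We do not expect them to have "
        "any effect on your symptoms."
    ),
}
```

Comparing the second and third arms head to head would isolate how much of any reported improvement comes from the suggestive rationale rather than from the mere act of knowingly taking a pill labeled “placebo.”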
Reality versus perception versus reporting
I’ve been very critical of how this study was reported, but I will admit that the overall design wasn’t that bad. After all, it’s a pilot study and, as such, wasn’t that large; we shouldn’t expect too much from it beyond potentially intriguing results that generate hypotheses and justify further study. Rather, what I had a problem with were two things: execution and spin. Of the two, by far the bigger problem is how the investigators spun their results and how that spin was reported by the press, as though this study were evidence that placebo effects can be triggered without at least some degree of deception of, or suggestion to, the patient. It shows nothing of the sort. That’s the problem.
Think of it this way. How is what Kaptchuk et al did when they came up with a “persuasive rationale” for their sugar pills to relieve IBS symptoms any different from what a homeopath does when he concocts stories of “like cures like” and of water that “remembers” the therapeutic molecules with which it has been in contact? Or from what a reiki practitioner does when he claims he can channel “healing energy” from the “universal source” through himself and into his clients? Or from what acupuncturists do when they claim to be “unblocking the flow of qi” to healing effect? Yes, I know that, to supporters of SBM, claims of unblocking qi, channeling energy, or exploiting the memory of water are not scientifically compelling; in fact, they’re totally ridiculous. However, we often forget that, to average lay people without a medical or scientific background, these woo explanations might well sound quite reasonable, even compelling. In the case of this study, the investigators did the same thing, only with a somewhat (and only somewhat) less woo-filled narrative about why their sugar pill should relieve their subjects’ symptoms.
One last note. Take a guess who funded this study. Go on. Where did the investigators get the money for this study?
That’s right. The National Center for Complementary and Alternative Medicine (NCCAM) funded the study.
Why am I not surprised? Actually, in all fairness, this is one of the better studies NCCAM has funded, which should give you an idea of how poor most NCCAM studies are. Even so, it’s still only a so-so study. It has a somewhat intriguing finding that could well be due to differences between the experimental groups, reporting bias, and/or recruitment bias. Or it might even point to a way to activate placebo effects with a minimum of deception and a minimal violation of patient autonomy. Maybe. But is it ground-breaking? Does it somehow demonstrate that the placebo effect can be activated “without deception”? Not quite.
For more, note that SBM co-blogger Peter Lipson has also commented on this study.